Talking face generation aims at generating photo-realistic video portraits of a target person driven by input audio. Because the mapping from input audio to output video is one-to-many (e.g., the same speech content may correspond to multiple feasible visual appearances), learning a deterministic mapping as in previous works introduces ambiguity during training and thus causes inferior visual results. Although this one-to-many mapping can be partly alleviated by a two-stage framework (i.e., an audio-to-expression model followed by a neural-rendering model), it remains insufficient, since the prediction is produced without enough information (e.g., emotions, wrinkles, etc.). In this paper, we propose MemFace, which complements the missing information with an implicit memory and an explicit memory aligned with the two stages respectively. More specifically, the implicit memory is employed in the audio-to-expression model to capture high-level semantics in the shared audio-expression space, while the explicit memory is employed in the neural-rendering model to help synthesize pixel-level details. Our experimental results show that MemFace consistently and significantly surpasses state-of-the-art results across multiple scenarios.
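For intuition, the implicit memory can be pictured as a bank of learnable key-value slots queried by the audio features via attention, with the retrieved entries complementing the audio before expression prediction. The minimal sketch below uses hypothetical module names and sizes and is not the authors' exact MemFace design.

```python
# Minimal sketch of an attention-based implicit memory (hypothetical design,
# not the authors' exact MemFace module).
import torch
import torch.nn as nn

class ImplicitMemory(nn.Module):
    """Learnable key-value memory queried by audio features via attention."""
    def __init__(self, num_slots: int = 1000, dim: int = 256):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.values = nn.Parameter(torch.randn(num_slots, dim) * 0.02)

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (batch, time, dim) audio features
        attn = torch.softmax(query @ self.keys.t() / query.shape[-1] ** 0.5, dim=-1)
        retrieved = attn @ self.values   # (batch, time, dim) semantics fetched from memory
        return query + retrieved         # audio features complemented with retrieved information

audio_feat = torch.randn(2, 50, 256)        # dummy audio features
expr_input = ImplicitMemory()(audio_feat)   # would be fed to the expression predictor
print(expr_input.shape)                     # torch.Size([2, 50, 256])
```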
Deep learning methods have contributed substantially to the rapid advancement of medical image segmentation, the quality of which relies on the suitable design of loss functions. Popular loss functions, including the cross-entropy and Dice losses, often fall short in boundary detection, thereby limiting high-resolution downstream applications such as automated diagnoses and procedures. We develop a novel loss function tailored to reflect boundary information and enhance boundary detection. Since the contrast between the segmented and background regions along the classification boundary naturally induces heterogeneity over the pixels, we propose the piece-wise two-sample t-test augmented (PTA) loss, which is infused with a statistical test for this heterogeneity. We demonstrate the improved boundary detection power of the PTA loss compared to benchmark losses without a t-test component.
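To make the idea concrete, the sketch below adds an illustrative two-sample t-test term over pixels straddling the ground-truth boundary to a standard segmentation loss; the boundary-band construction, weighting, and sign convention are assumptions for illustration and not the paper's exact PTA formulation.

```python
# Illustrative t-test term over a boundary band (assumed formulation, not the exact PTA loss).
import torch
import torch.nn.functional as F

def t_test_boundary_term(pred, mask, eps=1e-6):
    """pred: (B,1,H,W) foreground probabilities; mask: (B,1,H,W) binary ground truth."""
    kernel = torch.ones(1, 1, 3, 3, device=mask.device)
    dilated = (F.conv2d(mask, kernel, padding=1) > 0).float()
    eroded = (F.conv2d(mask, kernel, padding=1) == 9).float()
    band = dilated - eroded                 # thin band of pixels around the boundary

    fg = band * mask                        # band pixels inside the object
    bg = band * (1 - mask)                  # band pixels outside the object

    def weighted_stats(w):
        n = w.sum() + eps
        mean = (pred * w).sum() / n
        var = ((pred - mean) ** 2 * w).sum() / n
        return mean, var, n

    m1, v1, n1 = weighted_stats(fg)
    m2, v2, n2 = weighted_stats(bg)
    t = (m1 - m2) / torch.sqrt(v1 / n1 + v2 / n2 + eps)   # Welch-style t statistic
    return -t                               # reward strong contrast across the boundary

pred = torch.rand(2, 1, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = F.binary_cross_entropy(pred, mask) + 0.1 * t_test_boundary_term(pred, mask)
print(loss.item())
```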
Continual relation extraction (CRE) requires the model to continually learn new relations from class-incremental data streams. In this paper, we propose a Frustratingly Easy but Effective Approach (FEA) with two learning stages for CRE: 1) Fast Adaptation (FA), which warms up the model using only the new data; and 2) Balanced Tuning (BT), which fine-tunes the model on balanced memory data. Despite its simplicity, FEA achieves comparable or, in some cases, superior performance compared with state-of-the-art baselines. Through careful examination, we find that the data imbalance between new and old relations leads to a skewed decision boundary in the head classifier on top of the pretrained encoder, thereby hurting overall performance. In FEA, the FA stage unleashes the potential of the memory data for the subsequent fine-tuning, while the BT stage helps establish a more balanced decision boundary. From a unified view, we find that two strong CRE baselines can be subsumed into the proposed training pipeline. The success of FEA also provides actionable insights and suggestions for future model design in CRE.
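A schematic of the two-stage pipeline might look as follows; model.fit and the sampling details are hypothetical stand-ins, since the point is only the ordering of the stages and the class-balanced replay, not the authors' released code.

```python
# Schematic FA + BT training loop (hypothetical trainer interface, not the authors' code).
import random
from collections import defaultdict

def balance(memory, per_class):
    """Subsample the replay memory so every relation keeps at most per_class examples."""
    buckets = defaultdict(list)
    for example in memory:
        buckets[example["relation"]].append(example)
    balanced = []
    for items in buckets.values():
        balanced.extend(random.sample(items, min(per_class, len(items))))
    return balanced

def train_task(model, new_data, memory, fa_epochs=5, bt_epochs=5, per_class=10):
    # Stage 1: Fast Adaptation -- warm up on the new relations only.
    for _ in range(fa_epochs):
        model.fit(new_data)          # hypothetical one-epoch update
    memory.extend(new_data)
    # Stage 2: Balanced Tuning -- fine-tune on a class-balanced replay memory.
    for _ in range(bt_epochs):
        model.fit(balance(memory, per_class))
    return model

# The balancing step alone, on dummy data:
memory = [{"relation": r, "id": i} for i, r in enumerate("AABBBBCC")]
print(sorted(e["relation"] for e in balance(memory, per_class=2)))   # ['A', 'A', 'B', 'B', 'C', 'C']
```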
The use of augmented reality (AR) for navigation purposes has shown benefits in assisting physicians during surgical procedures. Such applications typically require knowing the pose of surgical tools and patients in order to provide visual information that surgeons can use during task execution. Existing medical-grade tracking systems use infrared cameras placed inside the operating room (OR) to identify retro-reflective markers attached to objects of interest and compute their pose. Some commercially available AR head-mounted displays (HMDs) use similar cameras for self-localization, hand tracking, and object depth estimation. This work presents a framework that uses the built-in cameras of an AR HMD to accurately track retro-reflective markers, such as those used during surgical procedures, without integrating any additional components. The framework is also capable of tracking multiple tools simultaneously. Our results show that marker detection and tracking can be achieved with an accuracy of 0.09 +- 0.06 mm for lateral translation, 0.42 +- 0.32 mm for longitudinal translation, and 0.80 +- 0.39 degrees for rotations around the vertical axis. Furthermore, to demonstrate the relevance of the proposed framework, we evaluate the system's performance in the context of surgical procedures. This use case was designed to replicate the scenario of K-wire insertions in orthopedic procedures. For the evaluation, two surgeons and one biomedical researcher were provided with visual navigation, each performing 21 injections. The results of this use case provide accuracy comparable to that reported for AR-based navigation procedures.
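One core step in this kind of marker-based tracking is recovering the rigid transform between the detected marker constellation and the tool's known geometry. The standard Kabsch/SVD solution is sketched below purely as background; it is not claimed to be the specific pipeline used in this work.

```python
# Kabsch/SVD estimation of a rigid transform from matched 3D marker points
# (generic background, not this paper's HMD-specific pipeline).
import numpy as np

def rigid_transform(src, dst):
    """Return rotation R and translation t such that dst ~= src @ R.T + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # cross-covariance of the centered point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

tool = np.array([[0., 0, 0], [50, 0, 0], [0, 30, 0], [0, 0, 20]])   # marker geometry in mm
pose = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.]])                # 90 deg rotation about z
observed = tool @ pose.T + [5, 2, 100]                               # markers as seen by the camera
R, t = rigid_transform(tool, observed)
print(np.round(t, 2))   # ~[  5.   2. 100.]
```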
Self-supervised skeleton-based action recognition with contrastive learning has attracted much attention. Recent literature shows that data augmentation and large numbers of contrastive pairs are crucial for learning such representations. In this paper, we find that directly extending contrastive pairs based on normal augmentations brings limited gains, because the contribution of contrastive pairs from normal data augmentations to the loss becomes smaller as training progresses. Therefore, we delve into hard contrastive pairs for contrastive learning. Motivated by the success of mixing augmentation strategies, which improve performance on many tasks by synthesizing novel samples, we propose SkeleMixCLR: a contrastive learning framework with a spatio-temporal skeleton mixing augmentation (SkeleMix) that complements current contrastive samples with hard contrastive samples. First, SkeleMix utilizes the topological information of skeleton data to mix two skeleton sequences by randomly combining cropped skeleton fragments (the trimmed view) with the remaining skeleton sequence (the truncated view). Second, a spatio-temporal mask pooling is applied to separate these two views at the feature level. Third, we extend the contrastive pairs with these two views. SkeleMixCLR leverages the trimmed and truncated views to provide abundant hard contrastive pairs, since they involve some context information from each other due to the graph convolution operations, which allows the model to learn better motion representations for action recognition. Extensive experiments on the NTU-RGB+D, NTU120-RGB+D, and PKU-MMD datasets show that SkeleMixCLR achieves state-of-the-art performance. Code is available at https://github.com/czhaneva/skelemixclr.
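A simplified version of the mixing augmentation is sketched below: a random spatio-temporal block of frames and joints from one clip is pasted into another, and the mask marks the pasted (trimmed) region. The real SkeleMix crops topologically connected joints rather than a random subset, so this is only an approximation.

```python
# Simplified spatio-temporal skeleton mixing (approximation of SkeleMix, not the released code).
import torch

def skele_mix(x1, x2, joint_ratio=0.5, frame_ratio=0.5):
    """x1, x2: (C, T, V) coordinate tensors over T frames and V joints.
    Returns the mixed clip and the binary mask marking entries taken from x2."""
    C, T, V = x1.shape
    t_len = max(1, int(T * frame_ratio))
    t0 = torch.randint(0, T - t_len + 1, (1,)).item()
    joints = torch.randperm(V)[: max(1, int(V * joint_ratio))]   # random joints (SkeleMix uses topology)

    mask = torch.zeros(1, T, V)
    mask[:, t0:t0 + t_len, joints] = 1.0
    mixed = x1 * (1 - mask) + x2 * mask      # trimmed view comes from x2, truncated view from x1
    return mixed, mask

clip_a, clip_b = torch.randn(3, 64, 25), torch.randn(3, 64, 25)   # e.g. NTU-style skeletons
mixed, mask = skele_mix(clip_a, clip_b)
print(mixed.shape, int(mask.sum()))
```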
Occluded person re-identification is a challenging task because human body parts can be occluded by obstacles (e.g., trees, cars, and pedestrians) in certain scenes. Some existing pose-guided methods tackle this problem by aligning body parts according to graph matching, but these graph-based methods are unintuitive and complicated. Therefore, we propose a transformer-based Pose-guided Feature Disentangling (PFD) method that utilizes pose information to clearly disentangle semantic components (e.g., the human body or joint parts) and selectively match non-occluded parts accordingly. First, a Vision Transformer (ViT) is used to extract patch features with its strong representational capability. Second, to preliminarily disentangle pose information from patch information, a matching and distributing mechanism is leveraged in the Pose-guided Feature Aggregation (PFA) module. Third, a set of learnable semantic views is introduced in the transformer decoder to implicitly enhance the disentangled body-part features. However, without additional supervision, those semantic views are not guaranteed to be related to the body. Therefore, a Pose-View Matching (PVM) module is proposed to explicitly match visible body parts and automatically separate occlusion features. Fourth, to better suppress the interference of occlusions, we design a pose-guided push loss that emphasizes the features of visible body parts. Extensive experiments on five challenging datasets for two tasks (occluded and holistic Re-ID) demonstrate that our proposed PFD performs favorably against state-of-the-art methods. Code is available at https://github.com/wangtaoas/pfd_net
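As a rough illustration of the pose-guided aggregation step, one can pool ViT patch features with keypoint heatmaps and suppress parts whose keypoint confidence is low (likely occluded); the shapes, threshold, and function below are assumptions for illustration, not the exact PFA/PVM modules.

```python
# Illustrative pose-guided pooling of patch features (assumed shapes, not the exact PFD modules).
import torch

def pose_guided_parts(patch_feats, heatmaps, keypoint_conf, thresh=0.3):
    """patch_feats: (B, N, D) ViT patch features; heatmaps: (B, K, N) keypoint heatmaps
    flattened over the same N patches; keypoint_conf: (B, K) detector confidences."""
    weights = torch.softmax(heatmaps, dim=-1)            # (B, K, N) attention over patches
    parts = weights @ patch_feats                        # (B, K, D) per-keypoint part features
    visible = (keypoint_conf > thresh).float().unsqueeze(-1)
    return parts * visible                               # zero out likely occluded parts

B, N, D, K = 2, 128, 768, 17
parts = pose_guided_parts(torch.randn(B, N, D), torch.randn(B, K, N), torch.rand(B, K))
print(parts.shape)   # torch.Size([2, 17, 768])
```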
Multimodal deep learning has been used to predict clinical endpoints and diagnoses from clinical routine data. However, these models suffer from scaling issues: they have to learn pairwise interactions between each piece of information in each data type, escalating model complexity beyond manageable scales. This has so far precluded the widespread use of multimodal deep learning. Here, we present a new technical approach, "learnable synergies", in which the model selects only the relevant interactions between data modalities and keeps an "internal memory" of relevant data. Our approach is easily scalable and naturally adapts to multimodal data inputs from clinical routine. We demonstrate this approach on three large multimodal datasets from radiology and ophthalmology and show that it outperforms state-of-the-art models in clinically relevant diagnosis tasks. Our new approach is transferable and will allow the application of multimodal deep learning to a broad set of clinically relevant problems.
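The scaling argument can be illustrated with a toy fusion layer in which a small, fixed set of learnable memory tokens cross-attends over all modality tokens, so that cost grows with the number of input tokens rather than with all pairwise interactions; this is a hypothetical module written for illustration, not the authors' exact "learnable synergies" layer.

```python
# Toy cross-attention fusion with a learnable memory (hypothetical, for scaling intuition only).
import torch
import torch.nn as nn

class MemoryFusion(nn.Module):
    def __init__(self, dim=256, num_memory=16, heads=4):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(1, num_memory, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, *modalities):
        tokens = torch.cat(modalities, dim=1)             # (B, total modality tokens, dim)
        mem = self.memory.expand(tokens.size(0), -1, -1)
        fused, weights = self.attn(mem, tokens, tokens)   # memory tokens select relevant inputs
        return fused, weights                             # fused: (B, num_memory, dim)

imaging = torch.randn(2, 196, 256)   # e.g. image patch tokens
tabular = torch.randn(2, 40, 256)    # e.g. embedded clinical variables
fused, _ = MemoryFusion()(imaging, tabular)
print(fused.shape)   # torch.Size([2, 16, 256])
```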
Most recent studies on neural constituency parsing focus on encoder structures, while few developments are devoted to decoders. Previous research has demonstrated that probabilistic statistical methods based on syntactic rules are particularly effective for constituency parsing, whereas syntactic rules are not used during the training of neural models in prior work, probably due to their enormous computational requirements. In this paper, we first implement a fast CKY decoding procedure harnessing GPU acceleration, based on which we further derive a syntactic rule-based (rule-constrained) CKY decoding. In the experiments, our method obtains 95.89 and 92.52 F1 on the PTB and CTB datasets respectively, showing significant improvements over previous approaches. Besides, our parser achieves strong and competitive cross-domain performance in zero-shot settings.
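For reference, a minimal (CPU, unbatched) CKY pass over span scores is shown below; the paper's contribution is a batched GPU variant that additionally constrains chart cells with syntactic rules, which this sketch does not include.

```python
# Minimal CKY over span scores: best score of a binary bracketing of the whole sentence.
import numpy as np

def cky(span_scores):
    """span_scores[i][j]: score of the span covering words i..j-1, for 0 <= i < j <= n."""
    n = span_scores.shape[0] - 1
    best = np.zeros((n + 1, n + 1))
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length
            if length == 1:
                best[i][j] = span_scores[i][j]
            else:
                splits = [best[i][k] + best[k][j] for k in range(i + 1, j)]
                best[i][j] = span_scores[i][j] + max(splits)
    return best[0][n]

n = 5
print(cky(np.random.rand(n + 1, n + 1)))
```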
The success of deep learning applications critically depends on the quality and scale of the underlying training data. Generative adversarial networks (GANs) can generate arbitrarily large datasets, but their diversity and fidelity are limited; this has recently been addressed by denoising diffusion probabilistic models (DDPMs), whose superiority has been demonstrated on natural images. In this study, we propose Medfusion, a conditional latent DDPM for medical images. We compare our DDPM-based model against GAN-based models, which constitute the current state of the art in the medical domain. Medfusion was trained and compared with (i) StyleGAN-3 on n=101,442 images from the AIROGS challenge dataset to generate fundoscopies with and without glaucoma, (ii) ProGAN on n=191,027 images from the CheXpert dataset to generate radiographs with and without cardiomegaly, and (iii) wGAN on n=19,557 images from the CRCMS dataset to generate histopathological images with and without microsatellite stability. On the AIROGS, CRCMS, and CheXpert datasets, Medfusion achieved lower (= better) FID than the GANs (11.63 versus 20.43, 30.03 versus 49.26, and 17.28 versus 84.31, respectively). Fidelity (precision) and diversity (recall) were also higher (= better) for Medfusion on all three datasets. Our study shows that DDPMs are a superior alternative to GANs for image synthesis in the medical domain.
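As a usage note, an FID comparison like the one above can be computed with off-the-shelf tooling such as torchmetrics (assuming it is installed with its image extras); the snippet below only illustrates the metric, is not the paper's exact evaluation pipeline, and real evaluations use far more than a handful of images.

```python
# FID between two image sets with torchmetrics (illustrative; real evaluations use many more images).
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048, normalize=True)   # normalize=True: float images in [0, 1]
real = torch.rand(16, 3, 299, 299)   # stand-ins for real medical images
fake = torch.rand(16, 3, 299, 299)   # stand-ins for generated images
fid.update(real, real=True)
fid.update(fake, real=False)
print(fid.compute())                 # lower = generated distribution closer to the real one
```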
Harvesting question-answer (QA) pairs from customer service chatlogs in the wild is an efficient way to enrich the knowledge base for customer service chatbots in cold-start or continuous-integration scenarios. Prior work attempts to obtain 1-to-1 QA pairs from growing customer service chatlogs, which fails to integrate the incomplete utterances from the dialog context for composite QA retrieval. In this paper, we propose the N-to-N QA extraction task, in which the derived questions and corresponding answers may be spread across different utterances. We introduce a suite of generative/discriminative tagging-based methods with end-to-end and two-stage variants that perform well on 5 customer service datasets, and for the first time set up a benchmark for N-to-N DialogQAE with utterance- and session-level evaluation metrics. With a deep dive into the extracted QA pairs, we find that the relations between and inside the QA pairs can serve as indicators for analyzing dialogue structure, e.g., information seeking, clarification, barge-in, and elaboration. We also show that the proposed models can adapt to different domains and languages, and reduce the labor cost of knowledge accumulation in real-world product dialogue platforms.
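To illustrate what N-to-N extraction yields, the toy grouping below pairs utterances that a tagger has labeled with question/answer group tags, so one QA pair can span several utterances on either side; the tag scheme and the dialogue are hypothetical, since the paper's taggers are learned models.

```python
# Toy grouping of tagged utterances into N-to-N QA pairs (hypothetical tag scheme).
from collections import defaultdict

def group_qa(tagged_dialogue):
    """tagged_dialogue: list of (utterance, tag) with tags like 'Q1', 'A1', or 'O'."""
    groups = defaultdict(lambda: {"q": [], "a": []})
    for utterance, tag in tagged_dialogue:
        if tag.startswith("Q"):
            groups[tag[1:]]["q"].append(utterance)
        elif tag.startswith("A"):
            groups[tag[1:]]["a"].append(utterance)
    return [(" ".join(g["q"]), " ".join(g["a"])) for g in groups.values()]

dialogue = [
    ("Hi, my package hasn't arrived.", "Q1"),
    ("It was ordered last Monday.", "Q1"),          # the question spans two utterances
    ("Sorry to hear that, let me check.", "O"),
    ("Your parcel is delayed at the regional hub.", "A1"),
    ("It should arrive within two days.", "A1"),    # the answer also spans two utterances
]
print(group_qa(dialogue))
```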